Increased-Confidence Adversarial Examples for Deep Learning Counter-Forensics

Authors

Abstract

Transferability of adversarial examples is a key issue for applying this kind of attack against multimedia forensics (MMF) techniques based on Deep Learning (DL) in a real-life setting. Adversarial example transferability, in fact, would open the way to the deployment of successful counter-forensic attacks also in cases where the attacker does not have full knowledge of the to-be-attacked system. Some preliminary works have shown that adversarial examples against CNN-based image forensic detectors are in general non-transferable, at least when the basic versions of the attacks implemented in the most popular attack libraries are adopted. In this paper, we introduce a general strategy to increase the strength of the attacks and evaluate their transferability as such strength varies. We experimentally show that, in this way, attack transferability can be largely increased, at the expense of a larger distortion. Our research confirms the security threats posed by the existence of adversarial examples even in multimedia forensics scenarios, thus calling for new defense strategies to improve the security of DL-based MMF techniques.
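
The abstract describes the attack-strengthening strategy only at a high level. As a rough, hypothetical illustration of the underlying idea (trading extra distortion for a more confident misclassification), the PyTorch sketch below keeps applying signed-gradient steps until the classifier's margin in favour of a wrong class exceeds a chosen threshold. The model, step size, and margin value are assumptions made for illustration, not the paper's exact algorithm.

    # Hypothetical sketch only, not the paper's algorithm: raise the
    # attack's confidence by perturbing until the strongest wrong-class
    # logit beats the true-class logit by at least `margin`.
    import torch

    def increased_confidence_attack(model, x, true_label, margin=5.0,
                                    step=1e-2, max_iters=500):
        x_adv = x.clone().detach().requires_grad_(True)
        for _ in range(max_iters):
            logits = model(x_adv).squeeze(0)
            wrong = logits.clone()
            wrong[true_label] = float('-inf')     # exclude the true class
            m = wrong.max() - logits[true_label]  # confidence margin
            if m.item() > margin:                 # target confidence reached
                return x_adv.detach()
            (-m).backward()                       # gradient ascent on the margin
            with torch.no_grad():
                x_adv -= step * x_adv.grad.sign()
                x_adv.clamp_(0.0, 1.0)            # keep a valid image
            x_adv.grad = None
        return x_adv.detach()

Raising the margin parameter yields adversarial examples that are misclassified with higher confidence at the cost of a larger perturbation, which is the strength/distortion trade-off the abstract describes.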

Similar Articles

Adversarial Examples: Attacks and Defenses for Deep Learning

With rapid progress and great successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks have recently been found vulnerable to well-designed input samples, called adversarial examples. Adversarial examples are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. The ...
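
As a minimal illustration of the vulnerability this survey discusses, the classic fast gradient sign method (FGSM) perturbs an input with a single signed-gradient step. The sketch below assumes a PyTorch classifier and a small budget eps; it is illustrative only and not code from the survey.

    # Hypothetical FGSM sketch: one signed-gradient step of size eps is
    # typically invisible to a human viewer, yet often enough to change
    # the predicted class. `model` and `eps` are assumptions.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, eps=4 / 255):
        # x: [1, C, H, W] image in [0, 1]; label: tensor of shape [1]
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), label).backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()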

Predicting Adversarial Examples with High Confidence

It has been suggested that adversarial examples cause deep learning models to make incorrect predictions with high confidence. In this work, we take the opposite stance: an overly confident model is more likely to be vulnerable to adversarial examples. This work is one of the most proactive approaches taken to date, as we link robustness with non-calibrated model confidence on noisy images, pro...

High Dimensional Spaces, Deep Learning and Adversarial Examples

In this paper, we analyze deep learning from a mathematical point of view and derive several novel results. The results are based on intriguing mathematical properties of high dimensional spaces. We first look at perturbation based adversarial examples and show how they can be understood using topological arguments in high dimensions. We point out a fallacy in an argument presented in a published...

Detecting Adversarial Examples - A Lesson from Multimedia Forensics

Adversarial classification is the task of performing robust classification in the presence of a strategic attacker. Originating from information hiding and multimedia forensics, adversarial classification has recently received a lot of attention in a broader security context. In the domain of machine learning-based image classification, adversarial classification can be interpreted as detecting so-c...

Counter-Forensics: Attacking Image Forensics

This chapter discusses counter-forensics, the art and science of impeding or misleading forensic analyses of digital images. Research on counter-forensics is motivated by the need to assess and improve the reliability of forensic methods in situations where intelligent adversaries make efforts to induce a certain outcome of forensic analyses. Counter-forensics is first defined in a formal decis...


Journal

Journal title: Lecture Notes in Computer Science

Year: 2021

ISSN: 0302-9743 (print), 1611-3349 (electronic)

DOI: https://doi.org/10.1007/978-3-030-68780-9_34